# OSCAR dataset
## Mongolian GPT-2
A Mongolian text-generation model based on the GPT-2 architecture, designed to generate fluent Mongolian text.
Tags: Large Language Model · Other
flax-community · 75 · 3
## TavBERT-He
A Hebrew BERT-style masked language model that operates at the character level, pre-trained by masking spans of characters, similar to SpanBERT.
Tags: Large Language Model · Transformers · Other
tau · 116 · 1
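The span-masking idea behind this model can be illustrated with a small, self-contained sketch. This is an assumption-laden illustration of SpanBERT-style masking applied to characters, not the model's actual pre-training code; the function name and all parameter values here are hypothetical.

```python
import random

def mask_char_spans(text, mask_rate=0.15, span_p=0.2, max_span=10,
                    mask_char="_", seed=0):
    # Hypothetical sketch: hide contiguous runs of *characters*
    # (rather than whole-word tokens), as in SpanBERT-style span
    # masking, until roughly mask_rate of the text is masked.
    rng = random.Random(seed)
    chars = list(text)
    budget = max(1, int(len(chars) * mask_rate))
    masked = 0
    while masked < budget:
        # Sample a geometric span length (success prob. span_p), clipped.
        span = 1
        while rng.random() > span_p and span < max_span:
            span += 1
        span = min(span, budget - masked, len(chars))
        start = rng.randrange(0, len(chars) - span + 1)
        for i in range(start, start + span):
            if chars[i] != mask_char:
                chars[i] = mask_char
                masked += 1
    return "".join(chars)

demo = mask_char_spans("שלום עולם, זהו משפט לדוגמה")  # Hebrew example sentence
print(demo)
```

During pre-training, the model would then be asked to reconstruct the hidden characters, which is what lets a character-level model learn sub-word morphology.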
## SinhalaBERTo
A relatively small model trained on the deduplicated OSCAR Sinhala dataset, providing foundational support for the low-resource Sinhala language.
Tags: Large Language Model · Other
keshan · 34 · 1
## Gujarati XLM-R Base
Based on the base variant of XLM-RoBERTa and fine-tuned on Gujarati monolingual data from OSCAR, this model is suitable for Gujarati natural language processing tasks.
Tags: Large Language Model · Transformers · Other
ashwani-tanwar · 22 · 0